18 research outputs found

    Using Appearance for Efficient Rendering and Editing of Captured Scenes

    Computer graphics strives to render synthetic images identical to real photographs. Multiple rendering algorithms have been developed over the better part of the last half-century. Traditional algorithms use 3D assets manually generated by artists to render a scene. While the initial scenes were quite simple, the field has developed complex representations of geometry, materials and lighting: the three basic components of a 3D scene. Generating such complex assets is hard and requires significant time and skill from professional 3D artists. Beyond asset generation, the rendering algorithms themselves involve complex simulation techniques to solve for global light transport in a scene, which costs further time.

    As the ease of capturing photographs improved, Image-Based Rendering (IBR) emerged as an alternative to traditional rendering. Using captured images as input became much faster than generating traditional scene assets. Initial IBR algorithms focused on creating a scene model from the input images in order to interpolate or warp them and enable free-viewpoint navigation of captured scenes. Over time the scene models became more complex, and using a geometric proxy computed from the input images became an integral part of IBR. Today, meshes reconstructed with Structure-from-Motion (SfM) and Multi-view Stereo (MVS) techniques are widely used in IBR, even though they introduce significant artifacts due to noisy reconstruction.

    In this thesis we first propose a novel image-based rendering algorithm, which focuses on rendering a captured scene with good quality at interactive frame rates. We study artifacts of previous IBR algorithms and propose an algorithm that builds upon previous work to remove them. The algorithm utilizes surface appearance in order to treat view-dependent regions differently from diffuse regions. Our Hybrid-IBR algorithm performs favorably against classical and modern IBR approaches for a wide variety of scenes in terms of quality and/or speed.

    While IBR provides solutions to render a scene, editing it is hard. Editing a scene requires estimating its geometry, material appearance and illumination. As our second contribution, we explicitly estimate scene-scale material parameters from a set of captured photographs to enable scene editing. While commercial photogrammetry solutions recover a diffuse texture to aid 3D artists in generating material assets manually, we aim to automatically create material texture atlases from captured images of a scene. We take advantage of the visual cues provided by the multi-view observations, feeding them to a Convolutional Neural Network (CNN) to obtain material maps for each view. Using the predicted maps, we create multi-view consistent material texture atlases by aggregating the information in texture space. Using our automatically generated material texture atlases, we demonstrate relighting and object insertion in real scenes.

    Learning-based tasks require large and varied datasets to learn a task efficiently. Training on synthetic datasets is the norm, but rendering large datasets with traditional rendering is time-consuming and provides limited variability. We propose a new neural rendering-based approach that learns a neural scene representation with variability, and use it to generate large amounts of data on the fly at a significantly faster rate. We demonstrate the advantage of neural rendering over traditional rendering in terms of dataset generation speed, as well as for learning auxiliary tasks given the same computational budget.

    Hybrid Image-based Rendering for Free-view Synthesis

    Image-based rendering (IBR) provides a rich toolset for free-viewpoint navigation in captured scenes. Many methods exist, usually with an emphasis either on image quality or rendering speed. In this paper we identify common IBR artifacts and combine the strengths of different algorithms to strike a good balance in the speed/quality tradeoff. First, we address the problem of visible color seams that arise from blending casually-captured input images by explicitly treating view-dependent effects. Second, we compensate for geometric reconstruction errors by refining per-view information using a novel clustering and filtering approach. Finally, we devise a practical hybrid IBR algorithm, which locally identifies and utilizes the rendering method best suited for an image region while retaining interactive rates. We compare our method against classical and modern (neural) approaches in indoor and outdoor scenes and demonstrate superiority in quality and/or speed.
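As a sketch of the blending step such free-viewpoint methods build on, per-view weights can favor input cameras whose direction to a surface point best matches the novel view's. This is a simplified Unstructured-Lumigraph-style heuristic, not this paper's exact algorithm; the function name and `k`-best-views scheme are illustrative:

```python
import numpy as np

def ulr_blend_weights(novel_pos, view_positions, surface_point, k=2):
    """Unstructured-Lumigraph-style blending weights for one surface point.

    Penalizes input views whose direction toward the surface point deviates
    from the novel view's direction; only the k best views keep weight.
    """
    d_novel = surface_point - novel_pos
    d_novel = d_novel / np.linalg.norm(d_novel)
    penalties = []
    for p in view_positions:
        d = (surface_point - p)
        d = d / np.linalg.norm(d)
        # angular penalty: 0 when the view direction coincides with the novel one
        penalties.append(1.0 - float(d @ d_novel))
    penalties = np.array(penalties)
    order = np.argsort(penalties)
    # threshold at the (k+1)-th best penalty so the k best views dominate
    thresh = penalties[order[k]] if len(penalties) > k else penalties.max() + 1e-6
    thresh = max(thresh, 1e-6)
    w = np.clip(1.0 - penalties / thresh, 0.0, None)
    w_sum = w.sum()
    return w / w_sum if w_sum > 0 else w
```

When the novel camera coincides with an input camera, that view receives all the weight, which is the continuity property these heuristics are designed for.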

    Image-Based Rendering of Cars using Semantic Labels and Approximate Reflection Flow

    Image-Based Rendering (IBR) has made impressive progress towards highly realistic, interactive 3D navigation for many scenes, including cityscapes. However, cars are ubiquitous in such scenes; multi-view stereo reconstruction provides proxy geometry for IBR, but has difficulty with shiny car bodies, and leaves holes in place of reflective, semi-transparent windows on cars. We present a new approach allowing free-viewpoint IBR of cars based on an approximate analytic reflection flow computation on curved windows. Our method has three components: a refinement step of reconstructed car geometry, guided by semantic labels, that provides an initial approximation for missing window surfaces and a smooth completed car hull; an efficient reflection flow computation, using an ellipsoid approximation of the curved car windows, that runs in real time in a shader; and a reflection/background layer synthesis solution. These components allow plausible rendering of reflective, semi-transparent windows in free-viewpoint navigation. We show results on several scenes casually captured with a single consumer-level camera, demonstrating plausible car renderings with significant improvement in visual quality over previous methods.
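What makes the ellipsoid approximation cheap is that the reflection direction becomes analytic: the gradient of the implicit surface gives the normal, and a mirror reflection follows. A minimal sketch under that assumption (hypothetical helper, not the paper's shader code):

```python
import numpy as np

def ellipsoid_reflect(point, incoming, semi_axes):
    """Reflect an incoming direction off an axis-aligned ellipsoid
    x^2/a^2 + y^2/b^2 + z^2/c^2 = 1 at a surface point.

    The ellipsoid stands in for a curved car window; its analytic
    normal makes the reflected direction cheap to evaluate per pixel.
    """
    a, b, c = semi_axes
    # gradient of the implicit surface gives the (unnormalized) outward normal
    n = np.array([2.0 * point[0] / a**2,
                  2.0 * point[1] / b**2,
                  2.0 * point[2] / c**2])
    n = n / np.linalg.norm(n)
    d = np.asarray(incoming, dtype=float)
    d = d / np.linalg.norm(d)
    # standard mirror reflection: r = d - 2 (d . n) n
    return d - 2.0 * (d @ n) * n
```

With the semi-axes equal, the ellipsoid degenerates to a sphere and the formula reduces to ordinary spherical mirror reflection, a useful sanity check.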

    Deep scene-scale material estimation from multi-view indoor captures

    The movie and video game industries have adopted photogrammetry as a way to create digital 3D assets from multiple photographs of a real-world scene. But photogrammetry algorithms typically output an RGB texture atlas of the scene that only serves as visual guidance for skilled artists to create material maps suitable for physically-based rendering. We present a learning-based approach that automatically produces digital assets ready for physically-based rendering, by estimating approximate material maps from multi-view captures of indoor scenes that are used with retopologized geometry. We base our approach on a material estimation Convolutional Neural Network (CNN) that we execute on each input image. We leverage the view-dependent visual cues provided by the multiple observations of the scene by gathering, for each pixel of a given image, the color of the corresponding point in other images. This image-space CNN provides us with an ensemble of predictions, which we merge in texture space as the last step of our approach. Our results demonstrate that the recovered assets can be directly used for physically-based rendering and editing of real indoor scenes from any viewpoint and novel lighting. Our method generates approximate material maps in a fraction of the time required by the closest previous solutions.
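The per-pixel gathering step described above can be sketched as a standard pinhole reprojection: back-project a reference pixel with its depth, transform it into each other view, and sample the color there. The function name, shared-intrinsics assumption, and nearest-neighbor sampling are illustrative, not the paper's implementation:

```python
import numpy as np

def gather_multiview_colors(pixel, depth, K, K_inv, poses, images):
    """For one pixel of a reference view, fetch the colors of the same
    3D point as seen by the other views (pinhole camera model).

    pixel: (u, v) in the reference image; depth: its depth value;
    K / K_inv: shared 3x3 intrinsics; poses: list of 4x4 ref-to-view
    transforms; images: list of HxWx3 arrays aligned with poses.
    """
    u, v = pixel
    # back-project the pixel to a 3D point in the reference camera frame
    p_ref = depth * (K_inv @ np.array([u, v, 1.0]))
    colors = []
    for T, img in zip(poses, images):
        p = T[:3, :3] @ p_ref + T[:3, 3]   # into the other view's frame
        if p[2] <= 0:                       # point is behind that camera
            colors.append(None)
            continue
        uv = K @ (p / p[2])                 # perspective projection
        x, y = int(round(uv[0])), int(round(uv[1]))
        h, w = img.shape[:2]
        colors.append(img[y, x] if 0 <= x < w and 0 <= y < h else None)
    return colors
```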

    Glossy Probe Reprojection for Interactive Global Illumination

    Recent rendering advances dramatically reduce the cost of global illumination. But even with hardware acceleration, complex light paths with multiple glossy interactions are still expensive; our new algorithm stores these paths in precomputed light probes and reprojects them at runtime to provide interactivity. Combined with traditional light maps for diffuse lighting, our approach interactively renders all light paths in static scenes with opaque objects. Naively reprojecting probes with glossy lighting is memory-intensive, requires efficient access to the correctly reflected radiance, and exhibits problems at occlusion boundaries in glossy reflections. Our solution addresses all these issues. To minimize memory, we introduce an adaptive light probe parameterization that allocates increased resolution for shinier surfaces and regions of higher geometric complexity. To efficiently sample glossy paths, our novel gathering algorithm reprojects probe texels in a view-dependent manner using efficient reflection estimation and a fast rasterization-based search. Naive probe reprojection often sharpens glossy reflections at occlusion boundaries, due to changes in parallax. To avoid this, we split the convolution induced by the BRDF into two steps: we precompute probes using a lower material roughness and apply an adaptive bilateral filter at runtime to reproduce the original surface roughness. Combining these elements, our algorithm interactively renders complex scenes while fitting in the memory, bandwidth, and computation constraints of current hardware.

    Evaluation and Multivariate Analysis of Cowpea [Vigna unguiculata (L.) Walp] Germplasm for Selected Nutrients—Mining for Nutri-Dense Accessions

    A total of 120 highly diverse cowpea [Vigna unguiculata (L.) Walp] genotypes, including indigenous and exotic lines, were evaluated for different biochemical traits using AOAC official methods of analysis and other standard methods. The results exhibited wide variability in the content of proteins (ranging from 19.4 to 27.9%), starch (from 27.5 to 42.7 g 100 g−1), amylose (from 9.65 to 21.7 g 100 g−1), TDF (from 13.7 to 21.1 g 100 g−1), and TSS (from 1.30 to 8.73 g 100 g−1). The concentration of anti-nutritional compounds like phenols and phytic acid ranged from 0.026 to 0.832 g 100 g−1 and 0.690 to 1.88 g 100 g−1, respectively. The correlation coefficient between the traits was calculated to understand the inter-trait relationship. Multivariate analysis (PCA and HCA) was performed to identify the major traits contributing to variability and to group accessions with a similar profile. The first three principal components, i.e., PC1, PC2, and PC3, contributed to 62.7% of the variation, where maximum loadings were from starch, followed by protein, phytic acid, and dietary fiber. HCA formed six distinct clusters at a squared Euclidean distance of 5. Accessions in cluster I had high TDF and low TSS content, while cluster II was characterized by low amylose content. Accessions in cluster III had high starch, low protein, and low phytic acid, whereas accessions in cluster IV contained high TSS, high phenol, and low phytic acid. Cluster V was characterized by high protein, phytic acid, TSS, and phenol content and low starch content, and cluster VI had a high amount of amylose and low phenol content. Some nutri-dense accessions were identified from the above-mentioned clusters, such as EC169879 and IC201086 with high protein (>27%), TSS, amylose, and TDF content. These nutrient profiles promise practical support for developing high-value food and feed varieties with a higher economic value through effective breeding strategies.
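The PCA step reported above reduces to the eigenvalues of the trait correlation matrix. A minimal sketch of computing the variance explained by the leading components from a genotype-by-trait matrix (illustrative numpy code, not the study's software pipeline):

```python
import numpy as np

def pca_variance_explained(X, n_components=3):
    """Share of total variance captured by the leading principal
    components of a genotype-by-trait matrix. Columns are standardized
    first, as is usual when traits carry different units."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    # eigenvalues of the correlation matrix, sorted largest first
    eigvals = np.linalg.eigvalsh(np.cov(Xs, rowvar=False))[::-1]
    return eigvals[:n_components].sum() / eigvals.sum()
```

On the study's data this quantity for the first three components would be the reported 62.7%; the test below just checks the mechanics on synthetic traits.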

    Development and optimization of NIRS prediction models for simultaneous multi-trait assessment in diverse cowpea germplasm

    Cowpea (Vigna unguiculata (L.) Walp.) is a legume that can help achieve sustainable nutrition and climate change goals. Assessing nutritional traits conventionally can be laborious and time-consuming. Near-infrared reflectance spectroscopy (NIRS) is a technique for rapidly determining biochemical parameters across large germplasm collections. NIRS prediction models were developed to assess protein, starch, TDF, phenols, and phytic acid based on MPLS regression. High external validation R² (RSQexternal) values of 0.903, 0.997, 0.901, 0.706, and 0.955 were obtained for protein, starch, TDF, phenols, and phytic acid, respectively. Models for all the traits displayed RPD values of >2.5 (except phenols) and low SEP, indicating excellent prediction performance. For all modeled traits, p-values ≥ 0.05 confirmed accuracy, and reliability scores >0.8 (except for phenols) ensured the applicability of the models. These prediction models will facilitate non-destructive, high-throughput screening of large cowpea germplasm collections and the selection of desirable chemotypes in any genetic background, with wide application in cowpea crop improvement programs across the world.
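The SEP and RPD statistics quoted above are straightforward to compute from reference and predicted values. A sketch using their common bias-corrected definitions (the function name is illustrative):

```python
import numpy as np

def nirs_model_metrics(y_ref, y_pred):
    """Standard NIRS validation statistics: SEP (standard error of
    prediction, corrected for bias) and RPD (ratio of the reference
    standard deviation to SEP). RPD > 2.5 is commonly read as a model
    with excellent screening ability."""
    y_ref = np.asarray(y_ref, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    bias = (y_pred - y_ref).mean()
    resid = y_pred - y_ref - bias
    sep = np.sqrt((resid ** 2).sum() / (len(y_ref) - 1))
    rpd = y_ref.std(ddof=1) / sep
    return sep, rpd
```

A prediction error that is small relative to the natural spread of the trait yields a large RPD, which is why the statistic works as a screening-quality threshold.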

    Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States

    Short-term probabilistic forecasts of the trajectory of the COVID-19 pandemic in the United States have served as a visible and important communication channel between the scientific modeling community and both the general public and decision-makers. Forecasting models provide specific, quantitative, and evaluable predictions that inform short-term decisions such as healthcare staffing needs, school closures, and allocation of medical supplies. Starting in April 2020, the US COVID-19 Forecast Hub (https://covid19forecasthub.org/) collected, disseminated, and synthesized tens of millions of specific predictions from more than 90 different academic, industry, and independent research groups. A multimodel ensemble forecast that combined predictions from dozens of groups every week provided the most consistently accurate probabilistic forecasts of incident deaths due to COVID-19 at the state and national level from April 2020 through October 2021. The performance of 27 individual models that submitted complete forecasts of COVID-19 deaths consistently throughout this year showed high variability in forecast skill across time, geospatial units, and forecast horizons. Two-thirds of the models evaluated showed better accuracy than a naïve baseline model. Forecast accuracy degraded as models made predictions further into the future, with probabilistic error at a 20-wk horizon three to five times larger than when predicting at a 1-wk horizon. This project underscores the role that collaboration and active coordination between governmental public-health agencies, academic modeling teams, and industry partners can play in developing modern modeling capabilities to support local, state, and federal response to outbreaks.
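Probabilistic accuracy of quantile forecasts like these is commonly scored with interval scores, the building blocks of the weighted interval score (WIS) used to compare forecast models; a minimal sketch of one interval's score:

```python
def interval_score(lower, upper, y, alpha):
    """Proper score for a central (1 - alpha) prediction interval:
    interval width, plus penalties scaled by 2/alpha whenever the
    observed value y falls outside the interval. Lower is better.
    Averaging weighted interval scores over several alpha levels
    gives the WIS used to rank probabilistic forecasts."""
    score = upper - lower
    if y < lower:
        score += (2.0 / alpha) * (lower - y)
    elif y > upper:
        score += (2.0 / alpha) * (y - upper)
    return score
```

The score rewards intervals that are both narrow and well calibrated: a wide interval always pays its width, while a narrow one gambles on avoiding the miss penalty.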

    The United States COVID-19 Forecast Hub dataset

    Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
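Once loaded, data in this style can be queried with ordinary dataframe operations. The sketch below uses a simplified version of the hub's long quantile format; the `model` column and all values here are illustrative, not actual hub records:

```python
import pandas as pd

# Hypothetical rows in a simplified long quantile format: each row is one
# model's point or quantile prediction for one target and location.
rows = pd.DataFrame({
    "model": ["modelA", "modelA", "modelB"],
    "target": ["1 wk ahead inc death"] * 3,
    "location": ["US", "US", "US"],
    "type": ["quantile", "quantile", "point"],
    "quantile": [0.025, 0.975, None],
    "value": [900.0, 1500.0, 1180.0],
})

# e.g. pull one model's central 95% interval for the target
interval = rows[(rows["model"] == "modelA") & (rows["type"] == "quantile")]
lo, hi = interval.sort_values("quantile")["value"].tolist()
```

Keeping forecasts in one long table like this is what makes the hub's cross-model comparisons and ensembling straightforward: every model's output is filtered and scored with the same few operations.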